error surface


Phase Transitions between Accuracy Regimes in L2 regularized Deep Neural Networks

Ibrahim Talha Ersoy, Karoline Wiesner

arXiv.org Artificial Intelligence

Increasing the L2 regularization of Deep Neural Networks (DNNs) causes a first-order phase transition into the under-parameterized phase, the so-called onset of learning. We explain this transition via the scalar (Ricci) curvature of the error landscape. We predict new transition points as the data complexity is increased and, in accordance with the theory of phase transitions, the existence of hysteresis effects. We confirm both predictions numerically. Our results provide a natural explanation of the recently discovered phenomenon of 'grokking' as DNN models getting stuck in a local minimum of the error surface, corresponding to a lower-accuracy phase. Our work paves the way for new probing methods of the intrinsic structure of DNNs, in and beyond the L2 context.
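
To make the predicted hysteresis concrete, here is a minimal PyTorch sketch (toy data and model of my own choosing, not the authors' experimental setup): sweep the L2 strength upward and then back down, warm-starting each run from the previous weights, and record accuracy at each point.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(1000, 20)
y = (X[:, :2].sum(dim=1) > 0).long()   # hypothetical toy classification task

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()

def train_at(lmbda, steps=500):
    # weight_decay is PyTorch's L2 penalty; the model is warm-started,
    # i.e. it keeps the weights reached at the previous lambda.
    opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=lmbda)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    return (model(X).argmax(dim=1) == y).float().mean().item()

lambdas = [10.0 ** k for k in range(-5, 0)]
up = [train_at(l) for l in lambdas]              # sweep lambda upward
down = [train_at(l) for l in reversed(lambdas)]  # then sweep back down
print("up:  ", list(zip(lambdas, up)))
print("down:", list(zip(lambdas[::-1], down)))
# A first-order transition with hysteresis shows up as the two sweeps
# disagreeing in accuracy at the same value of lambda.
```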



Identifying and attacking the saddle point problem in high-dimensional non-convex optimization

Yann N. Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, Yoshua Bengio

Neural Information Processing Systems

A central challenge to many fields of science and engineering involves minimizing non-convex error functions over continuous, high dimensional spaces. Gradient descent or quasi-Newton methods are almost ubiquitously used to perform such minimizations, and it is often thought that a main source of difficulty for these local methods to find the global minimum is the proliferation of local minima with much higher error than the global minimum. Here we argue, based on results from statistical physics, random matrix theory, neural network theory, and empirical evidence, that a deeper and more profound difficulty originates from the proliferation of saddle points, not local minima, especially in high dimensional problems of practical interest. Such saddle points are surrounded by high error plateaus that can dramatically slow down learning, and give the illusory impression of the existence of a local minimum. Motivated by these arguments, we propose a new approach to second-order optimization, the saddle-free Newton method, that can rapidly escape high dimensional saddle points, unlike gradient descent and quasi-Newton methods. We apply this algorithm to deep or recurrent neural network training, and provide numerical evidence for its superior optimization performance.
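
The core of the saddle-free Newton step is to rescale the gradient by the inverse of |H| = V diag(|lambda_i|) V^T rather than by H itself, so directions of negative curvature are descended instead of ascended. Below is a hedged 2-D sketch on a toy function of my own choosing (not one of the paper's benchmarks), with a small damping term added for numerical stability:

```python
# Toy function with a saddle at the origin and minima at (0, +/-1):
#   f(w) = w0^2 - w1^2 + 0.5 * w1^4
import numpy as np

def grad(w):
    return np.array([2 * w[0], -2 * w[1] + 2 * w[1] ** 3])

def hess(w):
    return np.array([[2.0, 0.0], [0.0, -2.0 + 6.0 * w[1] ** 2]])

def saddle_free_newton_step(w, damping=1e-3):
    lam, V = np.linalg.eigh(hess(w))                 # eigendecompose H
    H_abs_inv = V @ np.diag(1.0 / (np.abs(lam) + damping)) @ V.T
    return w - H_abs_inv @ grad(w)                   # descend along |H|^{-1} g

w = np.array([1.0, 1e-3])        # start just off the saddle point
for _ in range(50):
    w = saddle_free_newton_step(w)
print(w)   # heads for a minimum near (0, 1) instead of stalling at the saddle
```

Plain Newton would multiply by H^{-1} and be attracted to the saddle; taking absolute values of the eigenvalues flips the sign of the update along the negative-curvature direction, which is exactly what lets the method escape.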


FissionFusion: Fast Geometric Generation and Hierarchical Souping for Medical Image Analysis

Santosh Sanjeev, Nuren Zhaksylyk, Ibrahim Almakky, Anees Ur Rehman Hashmi, Mohammad Areeb Qazi, Mohammad Yaqub

arXiv.org Artificial Intelligence

The scarcity of well-annotated medical datasets requires leveraging transfer learning from broader datasets like ImageNet or pre-trained models like CLIP. Model soups average multiple fine-tuned models, aiming to improve performance on In-Domain (ID) tasks and enhance robustness against Out-of-Distribution (OOD) datasets. However, applying these methods to the medical imaging domain faces challenges and results in suboptimal performance. This is primarily due to differences in error-surface characteristics that stem from data complexities such as heterogeneity, domain shift, class imbalance, and distributional shifts between training and testing phases. To address this issue, we propose a hierarchical merging approach that involves local and global aggregation of models at various levels, based on the models' hyperparameter configurations. Furthermore, to alleviate the need to train a large number of models in the hyperparameter search, we introduce a computationally efficient method that uses a cyclical learning-rate scheduler to produce multiple models for aggregation in the weight space. Our method demonstrates significant improvements over the model-souping approach across multiple datasets (around a 6% gain on the HAM10000 and CheXpert datasets) while maintaining low computational costs for model generation and selection. Moreover, we achieve better results on OOD datasets than model soups.
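
For orientation, a uniform "soup" simply averages the parameters of same-architecture fine-tuned models in weight space; the hierarchical variant described above would apply such an average first within groups of hyperparameter configurations and then across the group averages. The sketch below is a minimal illustration of the averaging step, not the paper's implementation:

```python
import copy
import torch

def uniform_soup(models):
    # Average the parameters of identically-structured models in weight space.
    souped = copy.deepcopy(models[0])
    avg = souped.state_dict()
    for key in avg:
        if avg[key].is_floating_point():
            avg[key] = torch.stack(
                [m.state_dict()[key] for m in models]
            ).mean(dim=0)
    souped.load_state_dict(avg)
    return souped

# Stand-ins for fine-tuned models (same architecture, different weights):
models = [torch.nn.Linear(10, 2) for _ in range(3)]
soup = uniform_soup(models)
print(soup.weight[0, :3])   # each entry is the mean of the three models
```

In the cyclical-learning-rate variant, the candidate models would be snapshots taken at different points of one training run rather than independently fine-tuned networks, which is what keeps the model-generation cost low.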


Second Order Properties of Error Surfaces: Learning Time and Generalization

Neural Information Processing Systems

The learning time of a simple neural network model is obtained through an analytic computation of the eigenvalue spectrum for the Hessian matrix, which describes the second order properties of the cost function in the space of coupling coefficients. The form of the eigenvalue distribution suggests new techniques for accelerating the learning process, and provides a theoretical justification for the choice of centered versus biased state variables.
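
The practical upshot is easy to illustrate: for a quadratic cost, the Hessian of a single linear neuron is the input correlation matrix R = E[x x^T], each eigenmode converges like (1 - eta * lambda_i)^t, and stability requires eta < 2 / lambda_max, so the learning time is governed by the eigenvalue spread lambda_max / lambda_min. A hedged NumPy sketch (toy data of my own construction) showing why centered +/-1 state variables beat biased 0/1 ones:

```python
import numpy as np

rng = np.random.default_rng(0)
X01 = rng.integers(0, 2, size=(10000, 8)).astype(float)  # biased 0/1 units
Xpm = 2 * X01 - 1                                        # centered +/-1 units

for name, X in [("0/1 inputs ", X01), ("+/-1 inputs", Xpm)]:
    R = X.T @ X / len(X)            # Hessian of the quadratic cost
    lam = np.linalg.eigvalsh(R)     # eigenvalues, ascending
    print(f"{name}: condition number lam_max/lam_min = {lam[-1] / lam[0]:.1f}")
# Stable gradient descent needs eta < 2 / lam_max, while the smallest
# eigenmode converges slowest, so learning time ~ lam_max / lam_min.
# Biased inputs add one large "mean" eigenvalue; centering removes it.
```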


Calculus in Machine Learning: Why it Works

#artificialintelligence

Calculus is one of the core mathematical concepts in machine learning that permits us to understand the internal workings of different machine learning algorithms. One of the important applications of calculus in machine learning is the gradient descent algorithm, which, in tandem with backpropagation, allows us to train a neural network model. In this tutorial, you will discover the integral role of calculus in machine learning. A neural network model, whether shallow or deep, implements a function that maps a set of inputs to expected outputs.
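
As a concrete illustration of that chain of reasoning, here is a self-contained sketch (toy numbers of my own choosing) of a one-hidden-unit network trained by gradient descent, with the gradients written out via the chain rule exactly as backpropagation would compute them:

```python
import math

# Network y = w2 * tanh(w1 * x), loss L = 0.5 * (y - target)^2
w1, w2, lr = 0.5, -0.3, 0.1
x, target = 1.0, 0.8

for _ in range(200):
    h = math.tanh(w1 * x)               # forward pass
    y = w2 * h
    err = y - target                    # dL/dy
    dw2 = err * h                       # chain rule: dL/dy * dy/dw2
    dw1 = err * w2 * (1 - h * h) * x    # dL/dy * dy/dh * dh/dw1
    w1 -= lr * dw1                      # gradient descent updates
    w2 -= lr * dw2

print(w1, w2, w2 * math.tanh(w1 * x))   # output approaches the target 0.8
```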


What is Deep Learning and How Does it Work?

#artificialintelligence

At a very basic level, deep learning is a machine learning technique. It teaches a computer to filter inputs through layers to learn how to predict and classify information. Observations can be in the form of images, text, or sound. The inspiration for deep learning is the way that the human brain filters information. Its purpose is to mimic how the human brain works to create some real magic. In the human brain, there are about 100 billion neurons. Each neuron connects to about 100,000 of its neighbors.
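
As a minimal illustration of "filtering inputs through layers", here is a hedged PyTorch sketch (arbitrary layer sizes, untrained weights) of a small feedforward classifier:

```python
import torch
import torch.nn as nn

# Each layer is a linear map followed by a nonlinearity; stacking them
# turns an observation into progressively more abstract features.
model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),   # layer 1: extract simple features
    nn.Linear(128, 64), nn.ReLU(),    # layer 2: combine them
    nn.Linear(64, 10),                # output layer: one score per class
)

x = torch.randn(1, 784)               # e.g., a flattened 28x28 image
print(model(x).argmax(dim=1))         # predicted class index
```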

